Now, I am a cognitive scientist, which places me in a larger set of people: the set of scientists.  Science is itself a mushy label with an equally mushy set of meanings.  We have enough pseudo-sciences in existence to rival the accomplishments of what we would consider "real" science, as well as an equal number of people who would gladly hold their pseudo-science up against the most complicated tenets of string theory and quantum physics.  Point is, what makes a field a science is rather poorly defined, and I honestly don't see any rigorous or formulaic means of simplifying and defining the distinction.

What we can all agree on, at least to some extent, is that science is concerned with questions.  These questions are big and small, perplexing or perhaps simple, of interest to us or not.  Questions of matter, of existence and metaphysics, the nature of life, thought, our bodies and our minds.  Questions of how we came to be, how things came to be the way they are, and how things could conceivably change or become.  Science is composed of questions of all kinds.

Humanity, and even more so its scientists, finds itself in the middle ground of the universe and its questions.  On a grand scale, we are caught between the beginning of time and the end of time.  We are always wandering between ignorance and complete enlightenment, looking back at what we've learned through eons of experience and study while staring forward into all of the things we have yet to understand to even the slightest iota of truth.  On a more individual scale, we display the same patterns of middle ground-ing.  We neither always love, nor always hate.  We are intelligent in some places, idiotic in others.  We think we're better than some people; others we wish we could be instead.  We're brave...sometimes.  We're also cowards, of course.  We're a middling and meddling species, no doubt.

So it was a sort of "philosophical justice" when I realized that, in my mind, we are once again caught in the middle ground of two of the major questions facing science and humanity in the 21st century: the workings of the mind and the depths of the universe.  The study of the human mind can be considered, very roughly, along two levels of analysis: high-level function and behavior, and the biological/physiological changes that occur on the neuronal level.  Regardless of which level of analysis you choose to focus on, the object of understanding could not be nearer to us.  It forms the central core of any human and animal, and any conception of existence would be (quite literally) impossible without it.  We can poke it, dissect it, (try to) replicate it in computers, hold it in our hands and take notes about its squishiness, and observe the end results of its functionality in ourselves and others.  The mind is as close to us as anything could ever be.

Stretching beyond our ability to touch, feel, or grasp is the sheer immensity of the universe.  We have barely scratched the edges of our own atmosphere and remark to ourselves about how great an accomplishment that is (and don't get me wrong, it is a great accomplishment), and yet we all realize that we have only descended one inch into the depths of a great ocean of stars.  Out there is something we can't grasp, something we can't hold or manipulate with our fingers.  We can't poke it, we can't dissect it.  We are instead largely confined to staring at it and marvelling at its splendor and awesomeness.  It is as far from our grasp as anything could ever be.

Despite this stark contrast, the two share an important attribute: they embody two of the great mysteries that challenge the human race.  As I mentioned before, science is about questions, and they don't get much bigger than this.  To put it succinctly, the big questions for us are "what is the nature of that which is outside of us and beyond our grasp?" and "what is the nature of that which occurs inside of us, within the confines of our own bodily limits?"

So we are in the middle of two great mysteries, searching in both directions for answers to that which is closest to us and that which is farthest from us.  In a telling moment from the Tron movie series, we are treated to two opposing rallying cries:

"In there...is our destiny."
"Out there...is our destiny."

For humans and scientists, whatever our destiny may be, it does not appear to have drawn any such distinctions between what is in there and what is out there.  It is simply destiny.
 

It should be readily apparent that there is a very strong connection between language and cognition.  The way we use words to communicate ideas offers us insight into cognitive states and processes that would not be so obvious were it not for either their repeated occurrence in natural language as a "rule" or their occasional, seemingly unregulated occurrence as an "exception."  Not only can key insight into the mind be found by studying the consistencies that have developed seemingly independently across all human languages (concepts of nouns, verbs, adjectives, etc.), it is also important to note the effect words can have on the very cognitive processes long thought to produce natural language.


Our thoughts, feelings, and emotions can be influenced simply by the way an idea is being conveyed to us.  For instance, consider the following utterances:


1. "the presentation was unsuccessful"
2. "the presentation was an utter catastrophe"
3. "the presentation was poor"


These all could refer to the exact same presentation, even possibly from the same perspective.  In conversation, one could coherently utter all three of these sentences, and it would not be apparent to the listener that there was any form of inconsistency being observed.


However, the influence the word selection has on an observer who is not familiar with the actual presentation to which the sentences refer is quite stark.  (1) makes the presentation seem like it was generally not well received by the audience, although by no means on the scale of being terrible.  (2) makes the presentation look god-awful, as if the presenter was chased off the stage with rotten tomatoes.  (3) seems to walk a middle ground between the two.  (3) may even be more special than that, referring to how well put together a presentation is, as opposed to (1) and (2), which may just refer to how well it is received by the public (as anyone who's given presentations knows, these are two very distinct concepts).


Point is, the way an object is referred to linguistically influences our mental conception of that object, even if we have direct viewing access to it.  A famous study (which I will regrettably fail to cite properly, though it is likely Loftus and Palmer's eyewitness-memory work) illustrates the effect a linguistic description of an accident has on witnesses' representations of its severity, even when the witness had a first-hand view of the accident.


Extending this argument, we can look back at some of history's most influential people.  To start with a very stark example, consider Adolf Hitler.


One of the primary architects of evil, he defined what it meant to be "evil" in the 20th century, and continues to do so today long after his death at the end of World War II.  He was responsible for millions of deaths and untold suffering during a relatively short period of time.


For all of the evil he caused, he himself probably did not pull many triggers during that period.  Rather, he was able to persuade others to do it for him.  He held his country in a state of rapture, his firebrand leadership capturing the hearts of a nation and eventually brainwashing it into blind acceptance of what the rest of the world (and, with time, the German citizens themselves) would regard as insanity.


What gave him such power over a nation was his tongue.  It did not matter how radical his ideas were, or what evil they contained and could potentially result in.  It was what he said, his words and their effects, that brought a nation to its feet, and eventually, to arms.  It was his words that turned a country against a group of people, and painted them as menaces and degenerates.  It wasn't his fingers that pulled the trigger, it was the ideas planted in the minds of his followers with words that did it.


In effect, words were the weapon which he used to nearly destroy Europe, and possibly the world had he succeeded.


It is a weapon we are all armed with, to varying degrees of skill.  It is deadly in the most powerful of ways: while it probably can't hurt your body, it can influence the mind in ways you don't even realize.


The pen really is mightier than the sword, and is only going to become more so with the advent of mass communication.  In our country, we protect the use of language with law.  It is a weapon we are entitled to, and fairly so.


This only means that we, as observers of so much information and so many ideas, must constantly be on guard concerning the things we read or listen to.  Fortunately, we all can use words freely, and understand them relatively well.  Unfortunately, understanding the use of words does not guarantee a successful defense against dangerous and harmful ideas.


Language is a weapon.  True understanding of the semantics of words can be a shield.

 
The statistical revolution changed the face of modern AI, and resulted in a desertion of rule-based systems in favor of the promise of pure statistical processing.  Vast data corpora are in vogue, processed with counting, totalling, probabilizing, and any other kind of statistical mechanism that yields impressive performance on particular tasks.


It seems as though there is no limit to what you can do with statistical processing.  Natural language, rather than being viewed as an inherently psychological and linguistic domain that strikes at the heart of what it means to be conscious at a human level, has become a stomping ground for blind numbers to run about unchecked.  Powerful, simple, and very reliable, it's where the funding is.  But, without being long-winded, I'm going to tell you that statistical processing will never, by itself, realize the grand hopes we once had.


Researchers in AI once had dreams of truly understanding the human psyche, what it means to be conscious, and a host of other philosophical quandaries that for eons have been the sole domain of armchair thinkers and dreamers.  Somewhere along the way, many compromised and followed the money, the promise, and the success of blind numbers.


These numbers are just that, though: blind.  The first generation of AI researchers failed because they believed that they could continuously throw rules and facts at real world problems, and eventually their store of explicit cases would be so great that it would match human computational capacity, and consciousness or something like it would result.  Time has shown this to be impossible, and fundamentally flawed.


Why?  Because you can't brute force your way to an understanding of human nature!


Humans are intricate, delicate, and, it seems, defiant of whatever rules we could ever make explicit.  Especially when those rules are...blind.  Purely statistical methods will hit a performance ceiling that is the result of the same fundamental philosophical flaw: pure numbers are blind.  They do not understand the concept of what it means for something to have edges, to have color, to have a taste or touch or feel.  Numbers can encode things, but they can't embody them.  Statistical methods can analyze data, but they can never truly generate data themselves.  Statistical methods can only pretend to be generative of concepts, but can never truly be generative on a level that we could consider human.


This is not to say that rule-based methods are the answer (in fact, I do not believe that human cognition will ever be solved, be it by rule-based or statistical methods), but to hold statistics up as the key is just misleading oneself into the delusion that success is just around the corner.


Numbers are blind!  You can't brute force your way to understanding nature!


The ceiling someday will be hit.  Ultimately, you will need rule-based systems combined with statistics in order to continue making progress.


That's my prediction.
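
To make that prediction a little more concrete, here is a minimal sketch (my own toy illustration, not drawn from any actual system) of what "rules combined with statistics" could look like: a frequency-based tagger proposes labels, and a hand-written rule corrects a mistake the raw counts are blind to.  The corpus, tag set, and rule are all hypothetical.

```python
from collections import Counter, defaultdict

# 1. "Statistics": pick each word's most frequent tag from a tiny toy corpus.
tagged_corpus = [
    ("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
    ("the", "DET"), ("run", "NOUN"), ("was", "VERB"), ("long", "ADJ"),
]
counts = defaultdict(Counter)
for word, tag in tagged_corpus:
    counts[word][tag] += 1

def statistical_tag(word):
    """Blindly return the most frequent tag ever seen for this word."""
    return counts[word].most_common(1)[0][0] if word in counts else "NOUN"

# 2. "Rules": explicit knowledge the raw counts cannot see.
def apply_rules(words, tags):
    fixed = list(tags)
    for i in range(1, len(words)):
        # A word right after a determiner is very unlikely to be a verb.
        if fixed[i - 1] == "DET" and fixed[i] == "VERB":
            fixed[i] = "NOUN"
    return fixed

sentence = ["the", "runs"]
proposed = [statistical_tag(w) for w in sentence]
print(proposed)                         # ['DET', 'VERB'] -- statistics alone
print(apply_rules(sentence, proposed))  # ['DET', 'NOUN'] -- a rule corrects it
```

Nothing in there "understands" anything, of course; the point is only that the two kinds of machinery can be layered.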


Understanding the human mind better is not a simple game of blind numbers and blind rules.


It's the game of understanding ourselves.

 
I'm currently working on developing models for a cognitive architecture called 'Polyscheme', and having quite an adventure with it.


I liken the architecture to riding a horse.  There is a lot of machinery going on under there that, for the most part, we don't understand.  When you're just starting out with your partner in crime, it doesn't seem to listen to you on even the most basic tasks.  This is not the fault of the horse, but rather a consequence of your inexperience with mounting and riding the animal.


Horses don't like to be abused, and neither does Polyscheme.  If you do mess with it, it messes with you back.  It misbehaves.  It doesn't do what you want or expect.  It spits stuff back out at you in ways you don't understand.


Communicating with it at first can be tough.  However, like many difficult tasks of this nature, the instinct of knowing what the horse wants or feels is one gained with constant work and training.  You'll feel for the longest time that you're not making progress, and then *snap* all of a sudden you just get it.  You can't put words to it, you can't explicate it to a bystander who hasn't shared a like experience, but you just get it.


They are both magnificent in their own way, capable of things we can't even imagine ourselves.  But learning to communicate in a meaningful way so as to reach each other at a sub-linguistic level is a daunting task.


You can't learn to ride the animal any other way than getting on and trying not to die.  You can't learn to model any other way than jumping in and dragging your mind through the mud.


Consider it like language-learning through immersion.
 
I happened across an article today, one that reports that researchers have developed a robot that can "deceive".  The link: http://www.wired.com/wiredscience/2010/09/robots-taught-how-to-deceive/


A notable quote from Ronald Arkin in response to questions concerning how "ethical" it was to develop a robot with the capability of deception: 

“We have been concerned from the very beginning with the ethical implications related to the creation of robots capable of deception and we understand that there are beneficial and deleterious aspects. We strongly encourage discussion about the appropriateness of deceptive robots to determine what, if any, regulations or guidelines should constrain the development of these systems.”


Before I begin discussing this briefly, I feel it necessary to point out that it is a little bit premature to be discussing how dangerous it is to have robots that can deceive in existence.  If you pay close attention to the wording of the researchers, they do tread carefully around just how successful their tests actually were.  I sense that the methods by which deceptive behavior was generated in the robots were not as complete as the processes by which humans use deception on a daily basis, and so the behavior was perhaps more "hacked" together than the result of complex cognitive processing.  If deception were the result of some sophisticated upper-level reasoning, the ability to mislead another robot would certainly not be the only interesting behavior observed by the researchers, and the means by which the behavior was generated would constitute a substantial leap forward in current AI progress.


That being said, it does not diminish the achievements of the group.  Anyone who's done any research in AI will tell you just how difficult it is to generate even simple behaviors in artificial agents, be they completely encased in a computer or embodied as they are in the relevant experiment.  So, at the very least, hat's off to that team for doing something pretty cool.


Back on topic though.  I've noticed a pattern in many AI researchers who take it upon themselves to at least consider the ethical ramifications of the work that they do every day.  Even those who do not speak about or seem to concern themselves with the ethical implications of results in AI have, I'm willing to bet, at some point considered the complications of incorporating what could potentially be an artificial agent into a perhaps less-than-prepared society.


This pattern seems to be one of acknowledgement of the issues, but a fear/refusal to actually address them.  Consider Dr. Arkin's quote above where he "encourages discussion about the appropriateness of deceptive robots..."  He realizes at least on some level how complicated these issues are, but, like most researchers in the field, passes the buck to someone else.


I feel as though nobody wants to handle the sticky issue of ethics, rights, and other such abstract notions that people have been wrestling with for ages.  Nobody wants to handle this issue, even if it's their own creations that may someday render these concerns relevant to their society.  It's a complicated issue with no good guys and no bad guys.  After all, at this point we lack concrete examples to refer to that would help guide our thoughts concerning ethics and AI.  We've never handled man-made "golems" that caused us an existential crisis about what it means to be endowed with rights and entitled to fair treatment because of our personhood.


So the fear and uneasiness with the issue is understandable.  We're scientists after all, not politicians.


At the same time, I don't believe that the questions of rights and personhood and other such quicksand concepts should be left to the politicians entirely.  They have their own concerns.  They have their own fanbase to appease, elections to win.  Talking a good game and truly understanding the game by playing in it are two different things.


We as AI researchers should begin to play a more active role in steering the treatment of what could someday come to be true AI agents.  If it does happen that we create AI systems on the level of humans in all regards (which, if you read one of my previous posts, I actually feel is impossible for reasons I can't quite logically formulate), then we will be responsible for our creations, our children, and the best course of action for any child is one where the input of the parent is taken into account.


It's a difficult issue, and I may actually be lighting the fire that burns me a bit by only pointing out the existence of clouds in the future while not really offering a starting point from which we'd build a shelter, but I feel it's time to start taking this issue to heart, even if it's hilariously premature to do so.  The issues at the heart of this form an incredibly important subset of philosophy and of society's concerns in general, so it's not unreasonable to apply them to the possibility that many of the abstract examples tossed around for ages could become concrete points of debate grounded in the results of AI research.


One of us should take that flag.


Maybe it should be Chomsky.  After all, this is his backyard.
 
I am an entire two weeks into my PhD here at RPI, learning to move and survive in academic circles I had up until this point experienced only as an observer.  From my father to university life at Michigan, there was always an insulation from the truths of the life that awaits anyone who intends to go to graduate school, and perhaps, as a natural next step for those who survive it, take a position as a professor at an institution of higher learning.


This is a culture within a culture within a culture: the Cognitive Science Department at RPI within the culture of academia, within the greater culture that is the United States.  It is complete with its own set of customs and traditions, a hierarchy of significance and respect that chains downwards from respected tenured senior faculty to us first-year grad students still learning to strike out on our own.  It has its own language, complete with a whole new set of connotations few people outside these circles would find pragmatic towards everyday living.  "Funding" is no longer just a word for money, but a treasure chest that carries with it respect and the means by which one lives in this world.  "Emeritus," well, come on.  Most people outside academia don't even know what that means.  "Tenure" is not just protection from being fired or laid off; here it's a sign of esteem, a goal held in such high regard that being considered for it is in and of itself a mark of respect.


And these words aren't even of the technical variety.  Cognitive Science carries its own "buzz" words, words that anyone worth their salt in the field could, upon hearing them uttered, fly off on some rambling tangent concerning the implications, connotations, and denotations of the term in ways seemingly irrelevant to anyone else in the room, save for those who already speak the language.


The culture is very unique, as is any culture.  Of course, many of the standard rules of the containing culture still apply.  We still wash our hands when leaving the bathroom.  We still close the door on the way out of the room.  We still use a fork to pick up our food (although I'm starting to realize that depending on the food and the company, this is less and less strict than your parents would likely have you believe).


The key to any cultural learning is immersion.  Hence, we neophytes attempt to absorb as much as we can (or at least we ought to).  Observe the way people interact, the way they talk and communicate, the way they approach each other and the situations they frequently find themselves entwined in.  You learn the customs through practice.  You learn the language through communicating (although this is, admittedly, one of the more nerve-wracking experiences a young graduate student seems to face).


One hopes that with enough learning and time, we'll feel less like interlopers and more like natives.


Funny thing about this culture, of course, is that there are very few, if any, people who could claim to be natives.  Nobody is born with a PhD, or even a PhD-level understanding of any topic esoteric enough to be licensed as a research field and thus capable of conferring doctorates.


Thus, natives in the world of academia are illusions.


We are all interlopers in someone else's backyard.


For us in Cognitive Science, the backyard is probably Chomsky's.
 
I am currently having another bout with insomnia, my restless mind accelerating through a multitude of thoughts with little to no organization, rhyme, reason, or rhythm.


During this latest uncontrollable mental meandering, my thoughts turned to what it means to be in the field of cognitive science, and of course, A.I.  A conversation I had a few months ago with someone caused me to reconsider many of the motivations for, and the weight of, my work, in ways I'm only now coming to fully comprehend.


Once upon a time, I dreamt of being one of the minds who gave birth to true, strong artificial intelligence.  I wanted to create an artificial mind that rivaled that of humans across all domains, to the point where the only thing that separated humans and artificial systems of intelligence would be the origin and biological makeup of the entity.


The effects such a discovery would have on society had never truly struck a chord with me.  Despite my philosophical background, I had always considered the work of a scientist to be principally concerned with scientific progress, and its treatment and incorporation into society to be the work of...well, someone else.  After all, we as scientists have enough trouble as it is trying to understand the mechanisms of mind while simultaneously limited to using only our own mechanisms of mind.


Of course, the societal impact of true AI had occurred to me.  But such things as the treatment of these entities, systems of rights, and other such ethical concerns were something I felt the human race would perhaps cope with over time and accrued experience, with ample leeway for growing pains.


In short, the societal implications of my work were not my concern.


My thoughts gradually changed when presented with a different view, a new reason for caution.


As we talked, a person who was not in the field made explicit to me a realization that should come to anyone who's been in AI for a reasonable amount of time: the human mind is a wonderful thing, of great complexity and majesty.  There are things that occur within even the most basic of human thought processes that have proven extraordinarily resilient to even the most probing of minds and analyses.  To try to reduce the majesty of human cognition to a sequence of 1's and 0's is perhaps to miss the point of what makes the human mind special.


Think of the human mind as a beautiful one-way mirror.  We can "peer" outwards from one side of this mirror, but looking into it from the outside, we see only a reflection of our own eyes staring at its surface, never through it.  The majesty of human cognition is preserved by this one-way mirror, the ultimate treasure we cannot see (yet).  Majesty is preserved in mystery.


This was all well and good to me, although the weight of this fact didn't sink in until later.  What was especially disturbing to me at the time was the point that followed: in trying to create artificial intelligence, we run the risk of shattering the one-way mirror, and destroying what makes the human experience special, magical, majestic.


Our meddling intellect misshapes the beauteous forms of things.  We murder to dissect.
~William Wordsworth


Will we murder the beauty and mystery of the human mind in order to dissect it?  And for what gain?  So that we could have another feather to stick in our cap and scream out at the heavens "Look what man's intellect has discovered!  Truly, are we not Gods?"


It took many months for me to contemplate this.  It is, to me, a very weighty issue, one that strikes at the heart of my motivations for research.  While I didn't want to give up my passion for my research (where our passions lie is largely not up to us, anyway), the doubts leaking into my mind demanded attention, for they spoke to me at a level of truth I could never explicate with words.


My passion for A.I. research and my fear of shattering the mirror both exist within my mind now, and have largely found ways of coexisting within those limited confines.  While acceptance of the presence of both is certainly important to maintaining relevant aspects of my life, I have been considering how they came to coexist, my desires and my doubts intuitively in stark opposition to one another, yet standing together in my mind side by side.  What is their relation to one another that allows me to keep going with my work?


There is at the heart of my research a very strong notion, a very strong opinion: yes, the human experience and the mind are amazing, beautiful, a wonderful harmony of chaos and order mixed in ways we cannot even begin to comprehend.  Nor will we ever truly come to comprehend it.


The one-way mirror is unbreakable.  We can strike, pound, hammer with all our might, but the very way the mind is constructed is impervious to our attempts at penetration.  We are limited to the tools we are given, and these tools leave us poorly equipped to tear down the wall that separates our intellect from our minds.


Simply put, I am of the belief that it is impossible for us to truly understand what goes on in our minds to the level that would allow us to develop a complete theory of mind.


It is important for me to note here that this isn't to say we won't be able to create true A.I. someday.  I am simply of the belief that if we are successful in doing so, it won't be as a result of us understanding the mind and its majesty.  It will be done by circumventing this impossible problem, if at all.


So where does that leave me, a poor researcher in the field of cognitive science?  It leaves me freedom, and the power to pursue my research to the end, without the fear of shattering the one-way mirror and throwing the mystery of the mind to the winds.  For I can drive at my linguistics research, my philosophy research, my computer science research, all towards developing a model that would allow a system to understand natural language on some level, perhaps even a human's level, but no matter how deep I get, no matter how sophisticated my models become, they will only be able to stand outside that one-way mirror, and look in on what true majesty is like.


So do not worry (I tell myself constantly), the mystery of what makes human cognition spectacular is safe from our meddling intellects.  The beauteous form of it preserved by its very nature, a security by obscurity.


In closing, the most powerful moment of the conversation was an act.  It was an act that changed the way I thought, who I was, and who I was to become.  It was an act that carries more significance and weight than any infinite sequence of 0's and 1's could ever express.


For just one instant in time, I looked into the one-way mirror, and saw someone standing beside me, smiling back through the reflection.

 
So I was thinking about the nature of the research I'd like to do over the next few years (however long it takes for me to accomplish what it is I want to accomplish at the graduate student level, be it three or four or five years), and my mind turned to the way language functions between humans (as my thoughts tend to these days).  A very simple way for me to break down language was by dividing it into three components:
1. Syntax
2. Semantics
3. Pragmatics
This list is akin to the field of Semiotics, "the study of sign processes (semiosis), or signification and communication, signs and symbols (Wikipedia)."
In short, these three components all combine in a way to enable communication via the use of symbols and signs, and the meaning that these symbols and signs signify.  I will need to flesh out these ideas more, but I am curious about what theories of these interactions exist to represent meaning in natural language.  Perhaps an implementation of my own theory on Natural Language Semiotics in the cognitive architecture Polyscheme may be in order at some point in the near future...
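
As a note to myself about what "combining" might even mean computationally, here is a minimal sketch (my own illustration, with entirely hypothetical field names and values) of keeping the three components separate for a single utterance:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    syntax: list          # surface form: ordered tokens
    semantics: dict       # what the signs signify, e.g. a predicate frame
    pragmatics: dict      # context of use: intent, register, setting

request = Utterance(
    syntax=["could", "you", "pass", "the", "salt", "?"],
    semantics={"action": "pass", "object": "salt", "recipient": "speaker"},
    pragmatics={"form": "question", "intent": "request", "politeness": "high"},
)

# The same semantic frame could ride on different syntax ("Pass the salt!")
# and carry a different pragmatic force, which is exactly why the three
# layers seem worth modeling separately.
print(request.semantics)
```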
 
Well, that's one heck of a mouthful for a title.  Let me explain it.


It's been theorized that the foundations for human language use are based on concepts that are intrinsically non-linguistic.  That is to say, the basis of language is not the words uttered, written, or otherwise conveyed in some manner (although these are an integral part of language as we experience and know it), but rather the words' reference to processes, states, and other abstract concepts concerning extra-linguistic information, where the attachment of a word to an experience is arbitrary (it could just as easily have developed in the English language that the label "dog" be attached to a motorized vehicle intended for personal transportation, a.k.a. what we call a "car").

That being said, let us consider the combinatoric nature of natural language syntax and its relation to a canonical form of meaning:
1. "I ate the bagel."
2. "The bagel was eaten by me."
3. "At some point in the past, I consumed a bagel."
4. * "I ate bagel."
... and so on and so forth.
note: "*" means sentence is ill-formed

Examining these sentences, you can see they are all different in one sense, yet identical in another.  They are different in that they use different words, and in different orders.  Sentence (4) is not even well-formed as a sentence, yet it is understandable to us native and/or relatively fluent speakers of English what the intended meaning of (4) is.

At the same time, they are identical in that, pragmatically, they all mean the same thing.  Something like:
<tense=past>
<action=eat>
<subject=me>
<directObject=bagel>


That is to say, there is a single canonical form to which all of the above sentences refer simultaneously.  All of the sentences, with all of their lexical and syntactic differences accounted for, still mean essentially the same thing: before the time of the utterance, I ate a bagel.


Thus, we can see that even with differences in syntactic structure between sentences, they can still mean the same thing.  This point I hope is beyond debate, as the previous examples should suffice to convince you of its truth (I hope...).
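
To make the idea of a canonical form a bit more tangible, here is a toy sketch (my own illustration, not a proposal from any actual system) in which two tiny, surface-specific "parsers" map the active and passive sentences above onto one and the same frame.  The regular expressions cover only these example sentences; everything here is a stand-in for a real syntax-semantics interface.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class CanonicalFrame:
    tense: str
    action: str
    subject: str
    direct_object: str

def parse_active(sentence: str) -> CanonicalFrame:
    # Handles only the shape of sentence (1): "I ate the bagel."
    m = re.match(r"I ate the (\w+)\.", sentence)
    return CanonicalFrame("past", "eat", "me", m.group(1))

def parse_passive(sentence: str) -> CanonicalFrame:
    # Handles only the shape of sentence (2): "The bagel was eaten by me."
    m = re.match(r"The (\w+) was eaten by me\.", sentence)
    return CanonicalFrame("past", "eat", "me", m.group(1))

# Different syntax, identical canonical form.
assert parse_active("I ate the bagel.") == parse_passive("The bagel was eaten by me.")
```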


Here's where the water gets murkier.  In the next few posts I will argue briefly for a minimized set of native non-linguistic concepts that can account for the canonical form of sentences, a form that is not wholly defined by syntactic and lexical structure in natural language use.  Then I will consider possible forms for this set of native non-linguistic concepts, which underlies human natural language understanding via the canonical form representation it yields.  Once we have postulated a method of representing the canonical form of natural language, I will then consider the form of a possible interface between the syntactic and lexical composition of a natural language utterance, or set of utterances (for instance in dialogue and discourse), and that canonical form.


What this hopefully should yield is the beginning of a research goal for the future, where I can continue to explore the nature of the syntax-semantics interface with an eye towards:
1. Natural language origins
2. Natural language use
3. Cognitive underpinnings of natural language understanding


Kind of a big question, huh.
Keep in touch.

 
Soon enough, the wait will be over and I can finally begin my graduate studies at RPI.
Although I sometimes feel anything but prepared, there really isn't much time for doubt or fear to leak into my mind.


Soon enough, I'll be running in directions I've never been.  It will be difficult, challenging, arduous, <insert synonym for difficult here>.  Within a few days, I will be far away, fighting my way for something great and worthwhile.


I have many goals out there I want to accomplish in my time with Prof. Cassimatis' HLI lab, and I intend to get running as soon as possible.  Among the goals involved are making a meaningful contribution to the lab and the general body of knowledge in the field of cognitive science (how's that for a sound bite?), making a good impression on those who are relevant to my future, and perhaps shooting for finishing my PhD in 3-3.5 years.  As tough as that last point seems at first glance, I know that it's possible (it's mathematically possible on paper, and it's been done by others before).  It's possible if I go in with a plan, focus, and consistency.


The run to the PhD is not a sprint, but a marathon.  Hopefully I can run it in pretty near record time, and still enjoy some of the scenery along the way.  


Wish me luck!